On Neural Network Classifiers with Supervised Training
Authors
Abstract
A study of the classification capability of neural networks is presented, considering two types of architectures with supervised training, namely the Multilayer Perceptron (MLP) and the Radial-Basis Function (RBF) network. To illustrate the construction of the classifiers, we have chosen a problem that occurs in real-life experiments, in which one needs to distinguish between overlapping, Gaussian-distributed classes. A thoroughly commented comparative study of MLP- and RBF-type classifiers is elaborated, in order to reveal the advantages and disadvantages encountered when the two types of neural network architectures are used.
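Since the abstract contrasts MLP- and RBF-type classifiers on overlapping, Gaussian-distributed classes, a minimal sketch of such a comparison is given below. It is not the study's code: the class means and spreads, the number of hidden units and basis-function centres, the basis width, and the use of scikit-learn are all illustrative assumptions.

# Minimal sketch (not the paper's code): MLP vs. a simple RBF-network
# classifier on two overlapping Gaussian-distributed classes.
# Class means, network sizes, and the basis width below are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Two overlapping Gaussian classes in 2-D (assumed means and unit spread).
n = 500
X = np.vstack([rng.normal([0.0, 0.0], 1.0, size=(n, 2)),
               rng.normal([1.5, 1.5], 1.0, size=(n, 2))])
y = np.hstack([np.zeros(n), np.ones(n)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# MLP classifier with one hidden layer, trained by backpropagation.
mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)

# RBF network: Gaussian basis functions centred by k-means on the training
# data, followed by a linear (logistic) output layer on the activations.
def rbf_features(X, centers, width):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X_tr).cluster_centers_
width = 1.0  # assumed basis-function width
rbf_out = LogisticRegression(max_iter=1000).fit(rbf_features(X_tr, centers, width), y_tr)

print("MLP test accuracy:", mlp.score(X_te, y_te))
print("RBF test accuracy:", rbf_out.score(rbf_features(X_te, centers, width), y_te))

In this sketch the RBF network is built in the usual two-stage way: the basis-function centres are placed by k-means clustering and only the linear output layer is then fitted, whereas the MLP adjusts all of its weights by backpropagation.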
Similar resources
Learning Document Image Features With SqueezeNet Convolutional Neural Network
The classification of various document images is considered an important step towards building a modern digital library or office automation system. Convolutional Neural Network (CNN) classifiers trained with backpropagation are considered the current state-of-the-art model for this task. However, there are two major drawbacks of these classifiers: the huge computational power demand for...
Classification and Identification of Geological Facies Using Seismic Data and Competitive Neural Networks
Geological facies interpretation is essential for reservoir studies. The classification and identification of seismic traces is a powerful approach for distinguishing geological facies. The use of neural networks as classifiers is increasing in different sciences, such as seismology. They are computationally efficient and ideal for pattern identification. They can simply learn new algori...
Probabilistic Neural Network Training for Semi-Supervised Classifiers
In this paper, we propose another version of the help-training approach, employing a Probabilistic Neural Network (PNN) that improves the performance of the main discriminative classifier in the semi-supervised strategy. We introduce the PNN-training algorithm and use it to train a support vector machine (SVM) with a small number of labeled data and a large number of unlabeled data. We try t...
Neural Net and Traditional Classifiers
Previous work on nets with continuous-valued inputs led to generative procedures to construct convex decision regions with two-layer perceptrons (one hidden layer) and arbitrary decision regions with three-layer perceptrons (two hidden layers). Here we demonstrate that two-layer perceptron classifiers trained with back propagation can form both convex and disjoint decision regions. Such classif...
An algorithmic method to build good training sets for neural-network classifiers
To classify complex patterns using a neural network and a supervised training algorithm, one needs a complete training set that represents all possible characteristics of the examined problem. Building good training sets for neural network classifiers can be very difficult, especially if the problem involves complex patterns that are barely discernible by human perception. This paper proposes an algorithmic...